The attached QA workshop deck is not about a manual validation process. It is about building an AI-assisted testing loop: write a strong prompt, generate meaningful tests, commit them with the feature, and let the pipeline produce requirement-tagged proof automatically.
The deck opens with a blunt case for change: in a regulated environment, manual testing creates timing problems and evidence problems at the same time.
Every code change can trigger a large manual rerun. The cost is not just time spent clicking through flows. It is the queue created before a release can move.
IQ/OQ-style evidence and validation notes assembled by hand drift from the actual code and the actual run results. The more manual the package, the easier it is to miss something.
Linking requirements to tests through spreadsheets or after-the-fact mapping creates obvious failure points. The deck’s alternative is requirement tags embedded directly in test names and outputs.
Without automated gates, code can merge before tests actually prove the feature behavior. In the workshop model, no test means no merge.
The deck’s core workflow is a five-step chain from feature context to audit-ready PR output.
Name the feature, state the acceptance criteria, specify the test type, and call out the edge cases that matter.
Copilot or Claude Code produces runnable tests with the right structure, branch coverage targets, and requirement tags.
The test file lands in the same PR as the code so the link between behavior and validation is preserved at review time.
The pipeline executes automatically on the PR, collects artifacts, and enforces pass/fail gates before merge (a gate configuration sketch follows these steps).
Requirement-tagged outputs, coverage, and reports become versioned artifacts on the same change record reviewers are already using.
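The deck does not show the pipeline configuration itself. As one hedged sketch of how the coverage targets in step two and the merge gate in step four could be enforced, Jest's coverageThreshold option fails the test run, and therefore the PR check, when coverage drops below a target; the 80 percent figures below are illustrative, not numbers from the deck.

// jest.config.js -- illustrative gate; threshold values are assumptions, not deck requirements.
module.exports = {
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'], // lcov output doubles as a versioned artifact on the PR
  coverageThreshold: {
    global: {
      branches: 80,   // the run fails, and the merge gate with it, below 80% branch coverage
      statements: 80,
    },
  },
};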
The workshop’s prompt pattern is practical rather than abstract. It is designed to produce useful tests on the first pass, not generic boilerplate.
Name the feature and what it is supposed to do. In the workshop example, that is project-based time entry in the Hours Tracking proof of concept.
Reference the real acceptance criteria from the story or spec. The model needs to know the exact pass/fail behavior, not just a paraphrased summary.
Say whether you want unit, integration, end-to-end, or regression tests. Otherwise the model has to guess both level and output format.
Call out nulls, invalid values, boundaries, and permission checks explicitly. The deck keeps returning to negative, null, and limit conditions.
If the tests must support audit review, include the requirement tag in the prompt so the generated file carries that metadata from the start.
Pick one requirement-tag format and never vary it by team or feature. Stable tags such as [REQ-HT-012] are what make PR summaries, reports, and grep-based traceability actually usable later.
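Put together, the pattern reads like a short brief rather than a one-line ask. An illustrative prompt assembled from the deck's own Hours Tracking example (the acceptance wording is paraphrased from the generated tests, not quoted from the deck):

  Feature: project-based time entry in the Hours Tracking proof of concept, requirement [REQ-HT-012].
  Acceptance criteria: every entry needs a project code, hours cannot be negative, and a day's total cannot exceed 24.
  Test type: Jest unit tests for validateTimeEntry.
  Edge cases: negative hours, null project code, exactly 24 hours, totals above 24 hours.
  Include the [REQ-HT-012] tag in the describe block and test names, and return a complete test file ready to commit with the feature.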
The deck’s generated Jest example makes three expectations clear: requirement tags are built in, edge cases are included automatically, and the file should drop directly into the repo.
Tags such as [REQ-HT-012] appear directly in the describe block or test names so they can be surfaced later in reports and PR comments.
Negative values, null project codes, and boundary conditions like exactly 24 hours are expected outputs, not optional cleanup work for QA later.
The generated artifact should be a real Jest file or equivalent test asset that can be committed, executed, and reviewed immediately.
// [REQ-HT-012] Time Entry Validation Tests
describe('validateTimeEntry', () => {
  it('accepts valid 7.5h entry', () => {
    expect(() => validate({ projectCode: 'A', hours: 7.5 })).not.toThrow();
  });

  it('rejects negative hours', () => {
    expect(() => validate({ projectCode: 'A', hours: -1 })).toThrow('Hours cannot be negative');
  });

  it('rejects daily total > 24h', () => {
    expect(() => validate({ projectCode: 'A', hours: 25 })).toThrow('Daily total cannot exceed 24');
  });

  it('rejects null project code', () => {
    expect(() => validate({ projectCode: null, hours: 8 })).toThrow('Project code required');
  });

  it('allows exactly 24h on one project', () => {
    expect(() => validate({ projectCode: 'A', hours: 24 })).not.toThrow();
  });
});
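The deck shows only the generated tests, not the code they exercise. As a hedged sketch, a validate implementation consistent with the error messages above might look like this; the single-entry check for the 24-hour limit is a simplification, since a real daily total would aggregate the day's entries.

// Hypothetical implementation matching the [REQ-HT-012] test contract above; not taken from the deck.
function validate({ projectCode, hours }) {
  if (projectCode == null) {
    throw new Error('Project code required');
  }
  if (hours < 0) {
    throw new Error('Hours cannot be negative');
  }
  // Simplified: checks a single entry rather than aggregating the day's total.
  if (hours > 24) {
    throw new Error('Daily total cannot exceed 24');
  }
}

module.exports = { validate };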
Page takeaway: the workshop treats test generation as structured authoring, not magic. If the prompt contains feature behavior, test scope, edge cases, and requirement tags, the AI can produce artifacts that are meaningful in both engineering review and audit review.
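The deck leaves the reporting step implicit. The value of a stable tag format is easiest to see in a small pipeline script that greps the committed test files for requirement tags and emits a requirement-to-test map for the PR summary. The sketch below assumes a tests/ directory, a *.test.js naming convention, and a hypothetical scripts/req-trace.js location, none of which are specified in the deck.

// scripts/req-trace.js (hypothetical) -- harvest requirement tags from committed test files.
const fs = require('fs');
const path = require('path');

const TAG = /\[REQ-[A-Z]+-\d+\]/g;

// Recursively list every file under a directory.
function walk(dir) {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    return entry.isDirectory() ? walk(full) : [full];
  });
}

const map = {};
for (const file of walk('tests').filter((f) => f.endsWith('.test.js'))) {
  const tags = fs.readFileSync(file, 'utf8').match(TAG) || [];
  for (const tag of new Set(tags)) {
    if (!map[tag]) map[tag] = [];
    map[tag].push(file);
  }
}

// Output can be posted as a PR comment or stored as a versioned artifact, e.g.
// [REQ-HT-012]: tests/validateTimeEntry.test.js
for (const [tag, files] of Object.entries(map)) {
  console.log(`${tag}: ${files.join(', ')}`);
}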
Phase 04 — Verification. AI-assisted test generation and PR gates are the verification layer of the SDLC.